Modern Control Paradigms:

Lecture 1: Notion of Dynamical system, State space models, Control System and Implementation Issues

Modeling of Dynamical Systems

During this course we will discuss control methods for dynamical systems. Our task can be summarized as follows: to design control algorithms that make the controlled object perform the desired behavior, even in the presence of possible disturbances.

But before we dig into the details, let us recall what the main components of a control system are:

drawing

From the scheme above we may distinguish the following:

In this course we will mainly focus on the design of advanced controllers; however, to facilitate this we must ensure that we have a clear picture of the object we are going to control.

So today we will try to:

While doing so we will make two assumptions (at least for today): the plant actuators are ideal and supply the control inputs to the plant without any distortion, and the full state of the plant is measurable without any noise (the sensors are ideal as well).

Dynamical System

A dynamical system is a system whose behaviour, indicated by its output
signal, evolves over time, possibly under the influence of external inputs.

Examples of dynamical systems include:

Any object with quantities that change over time can be viewed as a dynamical system.

A dynamical system may be

Plant Models

A mathematical model is an abstraction of the real world. While building a model of a process, many simplifications may be made, and even seemingly accurate models never perfectly describe the underlying process. A model should be as simple as possible, and no simpler.

drawing

Anything in the physical or biological world, whether natural or involving technology, is subject to analysis by mathematical models if it can be described in terms of mathematical expressions.

For the purpose of control design, a suitable model of the system should be used.
Different kinds of models used in control design are:

At present, state-space models are used most often due to their generality, simplicity of implementation, and the mature mathematical apparatus that simplifies their analysis. However, impulse response and transfer function models may be useful as well: for instance, if one is interested in input-output relationships or some terminal properties of a system, impulse response models or transfer function descriptions can be used.

However, in this course we will stick to state-space models, since they are widely used in modern control systems.

State-space models

These models are based on the concept of the state of the system.
The state $\mathbf{x}$ of a system is the smallest set of variables (called state variables)
such that the knowledge of these variables $\mathbf{x}_0$ at some time $t_0$, together with
the knowledge of the input $\mathbf{u}(\tau)$ for all $\tau$ from $t_0$ to $t$, completely
determines the behavior of the system for any time $t > t_0$.

A state-space model of a system is expressed with a set of first-order
differential equations, one for each state variable.

Linear Systems

Linear control theory has been predominantly concerned with the study of linear time-invariant (LTI) control systems; in the general (possibly time-varying) case, a linear system has the form:

$$\dot{\mathbf{x}} = \mathbf{A}(t) \mathbf{x} + \mathbf{B}(t) \mathbf{u}$$

with $\mathbf{x} \in \mathbb{R}^n$ being a vector of states, $\mathbf{A} \in \mathbb{R}^{n \times n}$ the system matrix, $\mathbf{u} \in \mathbb{R}^m$ the input (control) vector, and $\mathbf{B} \in \mathbb{R}^{n \times m}$ the input matrix.

LTI systems have quite simple properties, such as:

Example: Mass-Spring-Damper

drawing

One can formulate this system in state-space form as:

$$\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} = \begin{bmatrix} \dot{y}\\ \ddot{y} \end{bmatrix} = \begin{bmatrix} 0 & 1\\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \begin{bmatrix} y\\ \dot{y} \end{bmatrix}$$
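As a quick sanity check, one can build this $\mathbf{A}$ matrix numerically and inspect its eigenvalues: for any positive $m$, $k$, $b$ their real parts are negative, so the system is stable. The parameter values below are arbitrary placeholders:

```python
import numpy as np

# Arbitrary example parameters: mass, stiffness, damping
m, k, b = 1.0, 2.0, 0.5

A = np.array([[0.0,   1.0],
              [-k/m, -b/m]])

# The eigenvalues of A govern the stability of the LTI system
eigvals = np.linalg.eigvals(A)
print(eigvals.real)  # both real parts negative: the system is stable
```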

Nonlinear Systems

Physical systems are inherently nonlinear. Thus, all control systems are nonlinear to a
certain extent.
Nonlinear control systems can be described by nonlinear differential equations.

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u},\mathbf{d}, t)$$

where $\mathbf{f} \in \mathbb{R}^n$ is some nonlinear smooth function.

Nonlinear systems, in contrast to linear ones:

The form above is fairly general; however, there are known special cases, such as control-affine and driftless systems, which we will study a bit later.

Example: Nonlinear Pendulum

drawing

Given the state $\mathbf{x} = [\theta, \dot{\theta}]^T$ we may formulate the equation above as:

$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{\theta} \\ \ddot{\theta} \end{bmatrix} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ \frac{1}{m L^2 + I}\left(u - mgL \sin x_1 - b x_2\right) \end{bmatrix}$$

Example: Variable Mass Lander

drawing

Noting that $m$ varies, we need to include it in the state, thus $\mathbf{x} = [x, \dot{x}, m]^T$ and the equation above is equivalent to:

$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{x} \\ \dot{v} \\ \dot{m} \end{bmatrix} = \begin{bmatrix} v \\ -g - \frac{k}{m} u \\ u \end{bmatrix} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} x_2 \\ -g - \frac{k}{x_3} u \\ u \end{bmatrix}$$
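A direct translation of these equations into code might look as follows. This is a sketch: the values of $g$ and $k$ are placeholders, and $u$ is the mass flow rate as above:

```python
import numpy as np

g, k = 9.81, 2.0  # placeholder gravity and thrust coefficient

def lander_dynamics(x, u):
    """State x = [position, velocity, mass]; returns x_dot per the equations above."""
    _, v, m = x
    return np.array([v, -g - (k / m) * u, u])
```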

Generalized Mechanical Systems

The equations of motion for most mechanical systems may be written in the following form:

$$\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}} + \mathbf{h}(\mathbf{q},\dot{\mathbf{q}}) + \mathbf{d}(\mathbf{q},\dot{\mathbf{q}},t) = \mathbf{Q} = \mathbf{B}(\mathbf{q})\mathbf{u}$$

where:

One can easily transform the mechanical system to state-space form by defining the state $\mathbf{x} = [\mathbf{q}, \dot{\mathbf{q}}]^T$:

$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{\mathbf{x}}_1 \\ \dot{\mathbf{x}}_2 \end{bmatrix} = \begin{bmatrix} \dot{\mathbf{q}} \\ \ddot{\mathbf{q}} \end{bmatrix} = \begin{bmatrix} \mathbf{x}_2 \\ \mathbf{M}^{-1}(\mathbf{x}_1) \big(\mathbf{B}(\mathbf{x}_1)\mathbf{u} - \mathbf{d}(\mathbf{x}_1, \mathbf{x}_2,t) - \mathbf{h}(\mathbf{x}_1, \mathbf{x}_2) \big) \end{bmatrix}$$
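This transformation is mechanical enough to be written once as a generic helper. The sketch below assumes $\mathbf{M}$, $\mathbf{h}$, $\mathbf{d}$, $\mathbf{B}$ are given as callables; all names here are made up for illustration:

```python
import numpy as np

def mechanical_to_state_space(M, h, d, B):
    """Turn M(q) q'' + h(q, dq) + d(q, dq, t) = B(q) u into x_dot = f(x, u, t)
    with the stacked state x = [q, dq]."""
    def f(x, u, t):
        n = x.size // 2
        q, dq = x[:n], x[n:]
        # Solve M(q) ddq = B(q) u - d - h rather than forming the inverse explicitly
        ddq = np.linalg.solve(M(q), B(q) @ u - d(q, dq, t) - h(q, dq))
        return np.concatenate([dq, ddq])
    return f
```

Using `np.linalg.solve` avoids explicitly inverting $\mathbf{M}$, which is both cheaper and numerically safer than `np.linalg.inv`.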

Example: Floating Rigid Body

The model of a floating rigid body is described by its position $\mathbf{p}$, linear velocity $\mathbf{v} = \dot{\mathbf{p}}$, and angular velocity $\boldsymbol{\omega}$, subject to an external force $\mathbf{f}$ and torque $\boldsymbol{\tau}$:

drawing

To rewrite this in the general form, one may define the following:
$$\mathbf{M}(\mathbf{q}) = \begin{bmatrix} m\mathbf{I} & 0 \\ 0 & \mathcal{I} \end{bmatrix}, \quad \mathbf{h}(\mathbf{q},\dot{\mathbf{q}}) = \begin{bmatrix} m\mathbf{g} \\ \boldsymbol{\omega} \times \mathcal{I}\boldsymbol{\omega} \end{bmatrix}, \quad \mathbf{B} = \mathbf{I}$$

Example: Artificial Satellite

An artificial satellite orbiting a planet may be described using Newtonian gravity as:

drawing

Given the generalized coordinates $\mathbf{q} = [r, \theta]$ and control $\mathbf{u} = [u_r, u_\theta]^T$, one can express the above in the general form:

$$\mathbf{M}(\mathbf{q}) = \begin{bmatrix} m & 0 \\ 0 & mr \end{bmatrix}, \quad \mathbf{h}(\mathbf{q},\dot{\mathbf{q}}) = \begin{bmatrix} -mr\dot{\theta}^2 + G\frac{mM}{r^2} \\ 2m\dot{r}\dot{\theta} \end{bmatrix}, \quad \mathbf{B} = \mathbf{I}$$

Example: Cart Pole

Let us consider the cart pole described by:

drawing

Defining the generalized coordinates as $\mathbf{q} = [r, \theta]$ and matching the terms yields:

$$\mathbf{M}(\mathbf{q}) = \begin{bmatrix} M + m & -mL \cos \theta \\ -mL \cos \theta & mL^2 \end{bmatrix}, \quad \mathbf{h}(\mathbf{q},\dot{\mathbf{q}}) = \begin{bmatrix} mL\dot{\theta}^{2}\sin \theta \\ -mLg\sin \theta \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
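Matching these terms in code gives a compact dynamics function. The numerical parameters below are placeholders, and the scalar control is taken here to enter the cart (first) equation as a force:

```python
import numpy as np

M_c, m, L, g = 1.0, 0.1, 0.5, 9.81  # placeholder cart mass, pole mass, length, gravity

def cartpole_dynamics(x, u):
    """x = [r, theta, dr, dtheta]; u is the force applied to the cart."""
    r, th, dr, dth = x
    M = np.array([[M_c + m,          -m*L*np.cos(th)],
                  [-m*L*np.cos(th),   m*L**2]])
    h = np.array([m*L*dth**2*np.sin(th),
                  -m*L*g*np.sin(th)])
    # Generalized force Q = [u, 0]: the motor pushes the cart, the pole is unactuated
    ddq = np.linalg.solve(M, np.array([u, 0.0]) - h)
    return np.array([dr, dth, ddq[0], ddq[1]])
```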

Note that the choice of a set of state variables for a system is not unique. So
a state-space model for a system is also not unique. There can be infinitely many possibilities.

When the model is obtained from a known differential equation, you can
always select meaningful state variables. However, state variables may not
directly relate to physical variables if the model is obtained from
identification methods.

Other Models

It should be noted that there are special cases where the equations above may be simplified or some interesting properties may be exploited; examples are:

We will consider some of these later on.

Discrete-Time Systems

Some dynamical systems are described in discrete time (which can be counted) rather than in continuous time, e.g., a system representing a bank account whose balance is reported once every day.

Discrete systems are described by difference equations:

$$\mathbf{x}_{k+1} = \mathbf{A}_k \mathbf{x}_k + \mathbf{B}_k \mathbf{u}_k, \qquad \mathbf{x}_{k+1} = \mathbf{f}_d(\mathbf{x}_k,\mathbf{u}_k,\mathbf{d}_k, k)$$
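The bank account mentioned above is a simple scalar instance of such a difference equation: with a constant daily interest rate $\rho$, the balance evolves as $x_{k+1} = (1 + \rho)x_k + u_k$, where $u_k$ is the day's deposit or withdrawal (the rate below is made up):

```python
rho = 0.001  # assumed daily interest rate

def balance_step(x_k, u_k):
    """One day of the account: x_{k+1} = (1 + rho) * x_k + u_k."""
    return (1.0 + rho) * x_k + u_k

x = 100.0
for k in range(30):  # simulate one month with no deposits
    x = balance_step(x, 0.0)
```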

Some systems are inherently discrete, whereas many continuous-time
systems are modeled in discrete-time for easier analysis and design with
digital computers.

Discretization and Simulation

In the field of robotics, most models are actually derived from continuous differential equations, while the controller is implemented in digital form (software). Thus, the overall control system may be described as follows:

So for the purpose of control and analysis, it is always useful to obtain a model of the
system in discrete time. Obtaining a discrete-time model is referred to as
discretization.

For an LTI system, we can obtain an exact discrete-time model if
the input remains constant over each sampling period.
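Concretely, with a zero-order hold on the input, the exact discrete matrices are $\mathbf{A}_d = e^{\mathbf{A}T}$ and $\mathbf{B}_d = \int_0^T e^{\mathbf{A}\tau} d\tau\, \mathbf{B}$. A common way to compute both at once (a standard trick, not part of the lecture itself) is a single matrix exponential of an augmented matrix:

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, T):
    """Exact zero-order-hold discretization of x' = Ax + Bu with sampling period T."""
    n, m = B.shape
    aug = np.zeros((n + m, n + m))
    aug[:n, :n] = A
    aug[:n, n:] = B
    # expm([[A, B], [0, 0]] * T) contains both A_d and B_d in its top block row
    M = expm(aug * T)
    return M[:n, :n], M[:n, n:]

# Double integrator example: A_d and B_d match the known closed forms
Ad, Bd = discretize_zoh(np.array([[0.0, 1.0], [0.0, 0.0]]),
                        np.array([[0.0], [1.0]]), T=0.1)
```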

Approximated discretization

Exact discretization of time-varying or nonlinear systems is difficult or may not be analytically possible.

Approximate discrete-time models are widely used in practice. For a small
sampling time $T$, we can write, for the nonlinear system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u},\mathbf{d}, t)$:

$$\mathbf{x}_{k+1} = \mathbf{x}_{k} + T\mathbf{f}\big(\mathbf{x}(kT),\mathbf{u}(kT),\mathbf{d}(kT), kT\big)$$

And for a linear system, this becomes:

$$\mathbf{x}_{k+1} = \big(\mathbf{I} + T\mathbf{A}(kT)\big)\mathbf{x}_k + T\mathbf{B}(kT)\mathbf{u}_k$$

This is called discretization by the Euler method.

Example:

Consider the nonlinear pendulum described by the following state-space representation (assuming all parameters are $1$):

$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ u - \sin x_1 - x_2 \end{bmatrix}$$

The discrete model is then given by the following set of difference equations:

$$\mathbf{x}_{k+1} = \begin{bmatrix} x_{1_{k+1}} \\ x_{2_{k+1}} \end{bmatrix} = \begin{bmatrix} x_{1_{k}} \\ x_{2_{k}} \end{bmatrix} + T \begin{bmatrix} x_{2_k} \\ u_k - \sin x_{1_k} - x_{2_k} \end{bmatrix} = \begin{bmatrix} x_{1_k} + Tx_{2_k} \\ x_{2_k} + Tu_k - T\sin x_{1_k} - Tx_{2_k} \end{bmatrix}$$

Simulation of ODE

While studying the ODE $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}, t)$, one is often interested in its solution $\mathbf{x}(t)$ (integral curve):

$$\mathbf{x}(t) = \mathbf{x}_0 + \int_{t_0}^{t} \mathbf{f}\big(\mathbf{x}(\tau),\mathbf{u}(\tau),\tau\big)\,d\tau, \quad \text{s.t. } \mathbf{x}(t_0) = \mathbf{x}_0$$

Simulation is nothing but taking the integral above.

However, in most practical situations the above cannot be solved analytically, and one should consider numerical integration instead, thus ending up with a discrete system:

$$\mathbf{x}_{k+1} = \mathbf{f}_d(\mathbf{x}_k,\mathbf{u}_k,\mathbf{d}_k, k), \quad \text{s.t. } \mathbf{x}_0 = \mathbf{x}(t_0)$$

Thus simulation is just iterating the discrete dynamics starting from the initial point $\mathbf{x}_0 = \mathbf{x}(t_0)$.

Let us implement the simulation of the nonlinear pendulum by iterating the discrete dynamics:

```python
import numpy as np

def f(state, t, control):
    """Pendulum dynamics; note the damping term is zeroed out here (undamped pendulum)."""
    u = control
    x1, x2 = state
    dx1 = x2
    dx2 = u - np.sin(x1) - 0*x2
    return np.array([dx1, dx2])

x_0 = np.array([1, 0])  # initial state
T = 2E-2                # sampling period
tf = 10                 # final time
N = int(tf/T)           # number of samples
X = []

# ITERATE DISCRETE DYNAMICS (Euler method)
x_prev = x_0
for k in range(N):
    X.append(x_prev)
    u_k = 0
    x_new = x_prev + T*f(x_prev, k*T, u_k)
    x_prev = x_new

x_sol_simp = np.array(X)
```

Let us plot the result:

```python
from matplotlib.pyplot import *

plot(x_sol_simp, linewidth=2.0)
grid(color='black', linestyle='--', linewidth=1.0, alpha=0.7)
ylabel(r'State $x_k$')
xlabel(r'Samples $k$')
show()
```

(figure: plot of the simulated state $x_k$ vs. samples $k$)

The Euler method implemented above is highly dependent on the sampling period $T$. There are other, more accurate methods; the most widely used are the 4th-order Runge-Kutta method and advanced variational integrators. However, we will not dig into the integration algorithms; instead, for the purpose of simulation we will use odeint from scipy.integrate:
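For reference, a single step of the classical 4th-order Runge-Kutta scheme mentioned above can be sketched as follows (generic textbook form, using the same `f(state, t, control)` signature as before):

```python
def rk4_step(f, x, t, u, T):
    """One classical RK4 step for x' = f(x, t, u) over a sampling period T."""
    k1 = f(x, t, u)
    k2 = f(x + 0.5*T*k1, t + 0.5*T, u)
    k3 = f(x + 0.5*T*k2, t + 0.5*T, u)
    k4 = f(x + T*k3, t + T, u)
    return x + (T/6.0)*(k1 + 2*k2 + 2*k3 + k4)
```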

```python
from scipy.integrate import odeint  # import integrator routine

scale = 5  # number of sub-samples per sampling period

X = []
x_prev = x_0
for k in range(N):
    X.append(x_prev)
    t_k = np.linspace(k*T, (k+1)*T, scale)
    u_k = 0
    x_new = odeint(f, x_prev, t_k, args=(u_k,))
    x_prev = x_new[-1, :]

x_sol = np.array(X)
plot(x_sol, linewidth=2.0)
plot(x_sol_simp, linewidth=2.0)
grid(color='black', linestyle='--', linewidth=1.0, alpha=0.7)
ylabel(r'State $x_k$')
xlabel(r'Samples $k$')
show()
```

(figure: comparison of the odeint and Euler solutions, $x_k$ vs. samples $k$)

Controller and Implementation

How do we make a given dynamical system display a desired behavior? This is one of the central questions in the field of control theory. One of the most widely used approaches to this problem is so-called feedback control.

drawing

Let us now assume that one has designed a feedback law as follows:

$$\mathbf{u} = \boldsymbol{\varphi}(\mathbf{x})$$

One may substitute the control law and obtain the equations of the closed-loop system:

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u}) = \mathbf{f}(\mathbf{x},\boldsymbol{\varphi}(\mathbf{x})) = \mathbf{f}_c(\mathbf{x})$$

Now this system is unforced and may be analyzed as if there were no control at all: basically, we have changed the overall nature of the plant, i.e., the governing dynamics.
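To make this concrete, here is a sketch of closing the loop on the unit-parameter pendulum from before, with a hypothetical linear feedback $\boldsymbol{\varphi}(\mathbf{x}) = -k_1 x_1 - k_2 x_2$ (the gains are arbitrary, chosen only for illustration):

```python
import numpy as np

def f(x, u):
    """Open-loop pendulum dynamics with unit parameters."""
    return np.array([x[1], u - np.sin(x[0]) - x[1]])

def phi(x):
    """Hypothetical feedback law u = -k1*x1 - k2*x2 with arbitrary gains."""
    return -4.0*x[0] - 2.0*x[1]

def f_c(x):
    """Closed-loop (unforced) dynamics obtained by substituting u = phi(x)."""
    return f(x, phi(x))

# Iterate the closed-loop dynamics with Euler steps: the state should settle at the origin
T, x = 2E-2, np.array([1.0, 0.0])
for k in range(1000):
    x = x + T*f_c(x)
```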

In the next lecture we will learn how to design controller functions and analyze the closed-loop response with different numerical and analytical tools; for today, we will move on to the practice session, where we will focus on the implementation side of the control system.